Learn how to create your own SolAR project using Qt Creator. This tutorial will take approximately 15 minutes to complete.
Open Qt Creator and create a new project (from the File menu).
Choose "Non-Qt project" and "Plain C++ Application", set the location and the name of your project (for example SolARFiducialTutorial), select "qmake" as the build system, and then the build kit of your choice. Finally, click on Finish to create your project.
Please note that you can put your SolAR projects in the same directory as your SolAR setup (at the same level as the samples).
Create in your project folder a file called packagedependencies.txt and add your dependencies to it, namely the basic dependencies required by the SolARFramework (SolARFramework, xpcf, boost, eigen and spdlog) as well as the ones required by the OpenCV module (SolARModuleOpenCV and opencv):
xpcf|1.0.0|xpcf|thirdParties|http://repository.b-com.com/
boost|1.64.0|boost|thirdParties|http://repository.b-com.com/
opencv|3.2.0|opencv|thirdParties|http://repository.b-com.com/
spdlog|0.14.0|spdlog|thirdParties|http://repository.b-com.com/amc-generic
eigen|3.3.4|eigen|thirdParties|http://repository.b-com.com/amc-generic
SolARFramework|0.4.0|SolARFramework|bcomBuild|url_repo_artifactory
SolARModuleOpenCV|0.4.0|SolARModuleOpenCV|bcomBuild|url_repo_artifactory
SolARModuleTools|0.4.0|SolARModuleTools|bcomBuild|url_repo_artifactory
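Reading the entries above, each line appears to be a pipe-separated record; the field breakdown below is our interpretation of the examples in this file, not an official specification:

```text
# <package name>|<version>|<package folder>|<channel>|<repository URL>
opencv|3.2.0|opencv|thirdParties|http://repository.b-com.com/
SolARModuleOpenCV|0.4.0|SolARModuleOpenCV|bcomBuild|url_repo_artifactory
```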
| If you prefer, you can copy/paste the packagedependencies.txt file from a sample code project. |
The SolAR framework uses build tools that are invoked through the .pro file of your Qt project. Copy/paste the following, and replace MyHelloWorldProject with your own project name:
QT += core
QT -= gui
CONFIG += c++11
CONFIG -= qt
CONFIG += console
CONFIG -= app_bundle
TARGET = MyHelloWorldProject
TEMPLATE = app
SOURCES += \
main.cpp
# The following define makes your compiler emit warnings if you use
# any feature of Qt which has been marked deprecated (the exact warnings
# depend on your compiler). Please consult the documentation of the
# deprecated API in order to know how to port your code away from it.
DEFINES += QT_DEPRECATED_WARNINGS
# You can also make your code fail to compile if you use deprecated APIs.
# In order to do so, uncomment the following line.
# You can also select to disable deprecated APIs only up to a certain version of Qt.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000 # disables all the APIs deprecated before Qt 6.0.0
HEADERS +=
DEFINES += MYVERSION=$${VERSION}
CONFIG(debug,debug|release) {
DEFINES += _DEBUG=1
DEFINES += DEBUG=1
}
CONFIG(release,debug|release) {
DEFINES += NDEBUG=1
}
win32:CONFIG -= static
win32:CONFIG += shared
QMAKE_TARGET.arch = x86_64 #must be defined prior to include
#NOTE : CONFIG as staticlib or sharedlib, QMAKE_TARGET.arch and PROJECTDEPLOYDIR MUST BE DEFINED BEFORE templatelibconfig.pri inclusion
include ( $$(BCOMDEVROOT)/builddefs/qmake/templateappconfig.pri)
DEPENDENCIESCONFIG = sharedlib
#NOTE : DEPENDENCIESCONFIG as staticlib or sharedlib, QMAKE_TARGET.arch and PROJECTDEPLOYDIR MUST BE DEFINED BEFORE packagedependencies.pri inclusion
include ( $$(BCOMDEVROOT)/builddefs/qmake/packagedependencies.pri)
unix {
LIBS += -ldl
}
macx {
QMAKE_MAC_SDK= macosx
QMAKE_CXXFLAGS += -fasm-blocks -x objective-c++
}
win32 {
QMAKE_LFLAGS += /MACHINE:X64
DEFINES += WIN64 UNICODE _UNICODE
QMAKE_COMPILER_DEFINES += _WIN64
QMAKE_CXXFLAGS += -wd4250 -wd4251 -wd4244 -wd4275
# Windows Kit (msvc2013 64)
LIBS += -L$$(WINDOWSSDKDIR)lib/winv6.3/um/x64 -lshell32 -lgdi32 -lComdlg32
INCLUDEPATH += $$(WINDOWSSDKDIR)lib/winv6.3/um/x64
}
Run qmake. In Qt Creator, your project explorer should look like this:
Then build your project to make sure it is correctly configured.
This tutorial will help you to write a simple program that displays an image in a window. The tutorial will take approximately 15 minutes to complete.
Declare and instantiate SolAR components.
Declare SolAR data structures.
Create your first SolAR pipeline based on a loop.
The hello world pipeline is very simple. It consists of two OpenCV components: the first loads an image, and the second displays it in a window.
To start, replace the code of your main.cpp with the following:
// ADD HERE:: header files of the components you want to use
using namespace SolAR;
using namespace SolAR::MODULES::OPENCV;
namespace xpcf = org::bcom::xpcf;
void main(int argc,char** argv){
// To redirect log to the console
LOG_ADD_LOG_TO_CONSOLE();
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
// ADD HERE: Find a way to load an image
// ADD HERE: Find a way to display your image in a window
}
The executable should take the path of the image you want to display as its only parameter. First, you will need to include the components' header files, here the two header files for SolARImageLoaderOpencv and SolARImageViewerOpencv.
// ADD HERE:: header files of the components you want to use
#include "SolARImageLoaderOpencv.h"
#include "SolARImageViewerOpencv.h"
Then, you need to declare and instantiate your components. To do so, you can use the xpcf framework provided with SolAR, which offers tools to easily instantiate components (such as xpcf::utils::make_shared, used below). Why use xpcf here? Because it offers more safety in the handling of your components: thanks to the provided shared reference called SRef, you no longer need to worry about memory management. You will see that you never have to delete your components and data structures yourself.
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARImageLoaderOpencv> myImageLoader = xpcf::utils::make_shared<SolARImageLoaderOpencv>();
SRef<SolARImageViewerOpencv> myImageViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
Great, your components are now instantiated. Next, you need to declare the data structures required for exchanging information between the components of your pipeline. These data structures correspond to the arcs of the pipeline schema presented above. Here, you need to declare an Image data structure to transmit the image from the SolARImageLoader component to the SolARImageViewer.
// ADD HERE: declarations of data structures used to connect components
// Example to declare a Keypoint:
// SRef<SolAR::datastructure::Keypoint> keypoint;
SRef<SolAR::datastructure::Image> myImage;
You are now ready to start your pipeline. You can connect your components by loading an image, recovering it, and passing it to the image viewer. Do not forget to check whether your image has been loaded correctly; if not, send an info log message by calling the macro LOG_INFO("info message").
// ADD HERE: Find a way to load an image
if (myImageLoader->loadImage(argv[1], myImage) == SolAR::FrameworkReturnCode::_ERROR_LOAD_IMAGE)
{
LOG_INFO("load image KO");
return;
}
else
{
LOG_INFO("load image OK");
}
// ADD HERE: Find a way to display your image in a window
myImageViewer->display("image display", myImage);
You can test your program with the following hello world image; just save it as HelloWorld.png.
Do not forget to set the path to your image as a parameter of your executable (in Qt Creator, click on Projects, Run, and set the path of your image in Command line arguments).
You can click on Run. If you have an eagle eye, perhaps you saw the hello world image appear in a window and immediately disappear. Why? The display function of your ImageViewer component displays the image only once. If you want to keep it displayed until you press a key, you need to create your first loop containing your pipeline. To do so, you can use the display method with a third parameter that corresponds to the key to press to close the viewer window. For example, you can use the escape key (code 27). If this key is pressed, the display function will return FrameworkReturnCode::_STOP.
// ADD HERE:: header files of the components you want to use
#include "SolARImageLoaderOpencv.h"
#include "SolARImageViewerOpencv.h"
using namespace SolAR;
using namespace SolAR::MODULES::OPENCV;
namespace xpcf = org::bcom::xpcf;
void main(int argc,char** argv){
// To redirect log to the console
LOG_ADD_LOG_TO_CONSOLE();
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARImageLoaderOpencv> myImageLoader = xpcf::utils::make_shared<SolARImageLoaderOpencv>();
SRef<SolARImageViewerOpencv> myImageViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
SRef<SolAR::datastructure::Image> myImage;
// ADD HERE: Find a way to load an image
if (myImageLoader->loadImage(argv[1], myImage) == SolAR::FrameworkReturnCode::_ERROR_LOAD_IMAGE)
{
LOG_INFO("load image KO");
return;
}
else
{
LOG_INFO("load image OK");
}
// ADD HERE: Find a way to display your image in a window
// The escape key to exit the sample
char escape_key = 27;
bool process = true;
while (process)
{
if (myImageViewer->display("image display", myImage, &escape_key) == FrameworkReturnCode::_STOP)
{
process = false;
}
}
}
Congratulations, you have implemented your first SolAR pipeline. It is quite simple, but you now know everything you need to develop your own pipeline: the component declaration and instantiation, the data structure declaration, and finally the pipeline implementation based on a loop.
This tutorial will help you to write a simple program that detects keypoints in an image and displays them in a window. The tutorial will take approximately 15 minutes to complete.
A PC configured with the SolAR framework (see setup).
A PC configured with a C++ IDE (Qt Creator is recommended; see setup).
You have followed the first tutorial on how to create and build your own SolAR project, followed by the second tutorial, Hello World.
Detect keypoints in an image
Display a set of 2D Points over an image
This keypoint detection pipeline consists of four SolAR components based on OpenCV (in red, the components used in the previous tutorial).
| The same pipeline will probably be offered with other component implementations in the future, when SolAR offers more modules. |
SolARImageLoaderOpencv (interface SolAR::api::image::IImageLoader ) will load your reference image (keep the one used in the previous tutorial).
SolARKeypointDetectorOpencv (interface SolAR::api::features::IKeypointDetector) will detect the keypoints in your image and put them in a vector.
SolAR2DOverlayOpencv (interface SolAR::api::display::I2DOverlay) will take the reference image and will draw the keypoints over.
SolARImageViewerOpencv (interface SolAR::api::display::IImageViewer) will display the reference image with keypoints in a window.
For this tutorial, you can copy and paste into your main.cpp the following code, mainly based on the previous tutorial, namely Hello World:
// ADD HERE:: header files of the components you want to use
#include "SolARImageLoaderOpencv.h"
#include "SolARImageViewerOpencv.h"
using namespace SolAR;
using namespace SolAR::MODULES::OPENCV;
namespace xpcf = org::bcom::xpcf;
void main(int argc,char** argv){
// To redirect log to the console
LOG_ADD_LOG_TO_CONSOLE();
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARImageLoaderOpencv> myImageLoader = xpcf::utils::make_shared<SolARImageLoaderOpencv>();
SRef<SolARImageViewerOpencv> myImageViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
SRef<SolAR::datastructure::Image> myImage;
// ADD HERE: Find a way to load an image
if (myImageLoader->loadImage(argv[1], myImage) != SolAR::FrameworkReturnCode::_SUCCESS)
{
LOG_INFO("load image KO");
return;
}
else
{
LOG_INFO("load image OK");
}
// ADD HERE : Find a way to detect keypoints
//we choose AKAZE2 because it is IP free
// ADD HERE : Find a way to draw keypoints over the original image
// ADD HERE: Find a way to display your image in a window
// The escape key to exit the sample
char escape_key = 27;
bool process = true;
while (process)
{
if (myImageViewer->display("image display", myImage, &escape_key) == FrameworkReturnCode::_STOP)
{
process = false;
}
}
}
In this code, you already have two of the four components required to detect keypoints, namely the SolARImageLoaderOpencv and SolARImageViewerOpencv components. You can add SolARKeypointDetectorOpencv and SolAR2DOverlayOpencv as described in blue in the pipeline schema shown above. First, add the header files:
// ADD HERE:: header files of the components you want to use
#include "SolARImageLoaderOpencv.h"
#include "SolARImageViewerOpencv.h"
#include "SolARKeypointDetectorOpencv.h"
#include "SolAR2DOverlayOpencv.h"
Then, add the declaration and the instantiation of these two new components to your code:
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARImageLoaderOpencv> myImageLoader = xpcf::utils::make_shared<SolARImageLoaderOpencv>();
SRef<SolARImageViewerOpencv> myImageViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
SRef<SolARKeypointDetectorOpencv> myKeypointsDetector = xpcf::utils::make_shared<SolARKeypointDetectorOpencv>();
SRef<SolAR2DOverlayOpencv> my2DOverlay = xpcf::utils::make_shared<SolAR2DOverlayOpencv>();
Then, you need to add a vector of Keypoint data structures to transmit the keypoints between the keypoint detector and the 2D overlay components (as indicated in the pipeline above). For SolAR2DOverlayOpencv, the image is passed by reference, and the overlay is drawn on the original image without creating a new one.
| Please see the data structure details in the API documentation. |
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
SRef<SolAR::datastructure::Image> myImage;
std::vector<SRef<SolAR::datastructure::Keypoint>> myKeypoints;
Perfect! Now, you can edit your pipeline to connect:
first, the SolARImageLoaderOpencv component to the SolARKeypointDetectorOpencv component;
then, the SolARImageLoaderOpencv and SolARKeypointDetectorOpencv components to the SolAR2DOverlayOpencv component;
and finally, the SolAR2DOverlayOpencv component to the SolARImageViewerOpencv component.
// ADD HERE: Find a way to load an image whose path is given as an executable parameter
if (myImageLoader->loadImage(argv[1], myImage) != SolAR::FrameworkReturnCode::_SUCCESS)
{
LOG_INFO("load image KO");
return;
}
else
{
LOG_INFO("load image OK");
}
// ADD HERE : Find a way to detect keypoints based on AKAZE2
myKeypointsDetector->setType(api::features::KeypointDetectorType::AKAZE2);
myKeypointsDetector->detect(myImage, myKeypoints);
// ADD HERE : Find a way to draw keypoints over the original image
my2DOverlay->drawCircles(myKeypoints, 3, 1, myImage);
// ADD HERE: Find a way to display your image in a window
// The escape key to exit the sample
char escape_key = 27;
bool process = true;
while (process)
{
if (myImageViewer->display("image display", myImage, &escape_key) == FrameworkReturnCode::_STOP)
{
process = false;
}
}
Do not forget to set the path to your image as a parameter of your executable (in Qt Creator, click on Projects, Run, and set the path of your image in Command line arguments).
You can click on Run. A window should appear with your image as well as a set of colored circles showing the keypoints that have been detected.
Below is the full source code of this tutorial:
// ADD HERE:: header files of the components you want to use
#include "SolARImageLoaderOpencv.h"
#include "SolARKeypointDetectorOpencv.h"
#include "SolAR2DOverlayOpencv.h"
#include "SolARImageViewerOpencv.h"
#include "SolARDescriptorsExtractorAKAZE2Opencv.h"
using namespace SolAR;
using namespace SolAR::MODULES::OPENCV;
namespace xpcf = org::bcom::xpcf;
void main(int argc,char** argv){
// To redirect log to the console
LOG_ADD_LOG_TO_CONSOLE();
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARImageLoaderOpencv> myImageLoader = xpcf::utils::make_shared<SolARImageLoaderOpencv>();
SRef<SolARKeypointDetectorOpencv> myKeypointsDetector = xpcf::utils::make_shared<SolARKeypointDetectorOpencv>();
SRef<SolAR2DOverlayOpencv> my2DOverlay = xpcf::utils::make_shared<SolAR2DOverlayOpencv>();
SRef<SolARImageViewerOpencv> myImageViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
SRef<SolAR::datastructure::Image> myImage;
std::vector<SRef<SolAR::datastructure::Keypoint>> myKeypoints;
// ADD HERE: Find a way to load an image
if (myImageLoader->loadImage(argv[1], myImage) != SolAR::FrameworkReturnCode::_SUCCESS)
{
LOG_INFO("load image KO");
return;
}
else
{
LOG_INFO("load image OK");
}
// ADD HERE : Find a way to detect keypoints
// we choose AKAZE2 because it is IP free
myKeypointsDetector->setType(api::features::KeypointDetectorType::AKAZE2);
myKeypointsDetector->detect(myImage, myKeypoints);
// ADD HERE : Find a way to draw keypoints over the original image
my2DOverlay->drawCircles(myKeypoints, 3, 1, myImage);
// ADD HERE: Find a way to display your image in a window
// The escape key to exit the sample
char escape_key = 27;
bool process = true;
while (process)
{
if (myImageViewer->display("image display", myImage, &escape_key) == FrameworkReturnCode::_STOP)
{
process = false;
}
}
}
This tutorial will help you to write a program that matches the keypoints between a reference image and the video stream of your camera. The tutorial will take approximately 30 minutes to complete.
A PC configured with the SolAR framework (see setup).
A PC configured with a C++ IDE (Qt Creator is recommended; see setup).
You have followed the tutorial on how to detect keypoints in an image.
Use two modules together
Extract descriptors of keypoints
Match keypoints
Display the keypoints that match in a side by side window
The matching pipeline consists of nine OpenCV components and one component available in the module Tools (SolARKeypointsReIndexer). In red, the components used in the previous tutorial:
SolARImageLoaderOpencv (interface SolAR::api::image::IImageLoader ) will load your reference image (keep the one used in the previous tutorial).
SolARKeypointDetectorOpencv (interface SolAR::api::features::IKeypointDetector) will detect the keypoints in your image and put them in a vector (keep the one used in the previous tutorial).
SolARDescriptorsExtractorAKAZE2Opencv (interface SolAR::api::features::IDescriptorsExtractor) will extract a descriptor for each detected keypoint. Here, we are using the AKAZE descriptor.
SolARCameraOpencv (interface SolAR::api::input::devices::ICamera) will provide the video stream captured from a camera as a set of successive images. To start the camera, use the start method with the id of the camera (generally id 0). You can then access the last image captured by the camera by calling the getNextImage method.
SolARKeypointDetectorOpencv (interface SolAR::api::features::IKeypointDetector) will detect the keypoints in the current image of the camera stream.
SolARDescriptorsExtractorAKAZE2Opencv (interface SolAR::api::features::IDescriptorsExtractor) will extract a descriptor for each keypoint detected in the current image captured by the camera.
SolARDescriptorMatcherKNNOpencv (interface SolAR::api::features::IDescriptorMatcher) will match the descriptors of keypoints detected in the reference image with the descriptors of keypoints detected in the current image of the camera stream. This component outputs a vector of matches, a match being a pair of indices: the first index of the pair is the index of the descriptor (resp. keypoint) in the descriptor buffer (resp. the vector of keypoints) of the reference image; the second is the index of the descriptor (resp. keypoint) in the descriptor buffer (resp. the vector of keypoints) of the current image of the camera stream.
SolARKeypointsReIndexer (interface SolAR::api::features::IKeypointsReIndexer) will output two vectors of 2D points where the nth 2D point of the first vector matches the nth 2D point of the second vector. To do that, this component uses the two vectors of keypoints extracted from the two images as well as the vector of matches. You will find this component in the module called Tools.
SolARSideBySideOverlayOpencv (interface SolAR::api::display::ISideBySideOverlay) will output a side-by-side image based on two input images, and will display as an overlay the segments joining the keypoints of the first image with the matching keypoints of the second image.
SolARImageViewerOpencv (interface SolAR::api::display::IImageViewer) will display the side-by-side image with the matches in a window.
For this tutorial, you can copy and paste into your main.cpp the following code, mainly based on the previous tutorial, namely Detect keypoints in an image:
// ADD HERE:: header files of the components you want to use
#include "SolARImageLoaderOpencv.h"
#include "SolARKeypointDetectorOpencv.h"
#include "SolARImageViewerOpencv.h"
using namespace SolAR;
using namespace SolAR::MODULES::OPENCV;
using namespace SolAR::MODULES::TOOLS;
namespace xpcf = org::bcom::xpcf;
void main(int argc,char** argv){
// To redirect log to the console
LOG_ADD_LOG_TO_CONSOLE();
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARImageLoaderOpencv> myImageLoader = xpcf::utils::make_shared<SolARImageLoaderOpencv>();
SRef<SolARKeypointDetectorOpencv> myKeypointsDetector = xpcf::utils::make_shared<SolARKeypointDetectorOpencv>();
SRef<SolARImageViewerOpencv> myImageViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
SRef<datastructure::Image> myRefImage;
std::vector<SRef<datastructure::Keypoint>> myRefKeypoints;
// ADD HERE: Find a way to load an image
if (myImageLoader->loadImage(argv[1], myRefImage) != SolAR::FrameworkReturnCode::_SUCCESS)
{
LOG_INFO("load image KO");
return;
}
else
{
LOG_INFO("load image OK");
}
// ADD HERE : Find a way to detect keypoints
myKeypointsDetector->setType(api::features::KeypointDetectorType::AKAZE2);
myKeypointsDetector->detect(myRefImage, myRefKeypoints);
// ADD HERE : Find a way to extract descriptors
// ADD HERE : Launch the camera
// The escape key to exit the sample
char escape_key = 27;
bool process = true;
// The pipeline loop
while (true)
{
// Get the last image returned by the camera
// ADD HERE : Find a way to detect keypoints in the current image captured by the camera
myKeypointsDetector->detect(myCamImage, myCamKeypoints);
// ADD HERE : Find a way to extract descriptors for current image captured by the camera
// ADD HERE : Find a way to match keypoints
// ADD HERE : Find a way to reindex matched keypoints
// ADD HERE : Display on a side by side image the matches
if (myImageViewer->display("Matches", mySideBySideImage, &escape_key) == FrameworkReturnCode::_STOP)
break;
}
}
In this code, you already have three of the ten components required to match keypoints: SolARImageLoaderOpencv, SolARKeypointDetectorOpencv, and SolARImageViewerOpencv. As you will need a component embedded in the module Tools, if you are using Qt Creator you will first need to add this module to packagedependencies.txt so that your program links against it, and then run qmake.
xpcf|1.0.0|xpcf|thirdParties|http://repository.b-com.com/
boost|1.64.0|boost|thirdParties|http://repository.b-com.com/
opencv|3.2.0|opencv|thirdParties|http://repository.b-com.com/
spdlog|0.14.0|spdlog|thirdParties|http://repository.b-com.com/amc-generic
eigen|3.3.4|eigen|thirdParties|http://repository.b-com.com/amc-generic
SolARFramework|0.4.0|SolARFramework|bcomBuild|url_repo_artifactory
SolARModuleOpenCV|0.4.0|SolARModuleOpenCV|bcomBuild|url_repo_artifactory
SolARModuleTools|0.4.0|SolARModuleTools|bcomBuild|url_repo_artifactory (1)
Now, in your main program, you can add the new components as described in blue in the pipeline schema shown above. First, add the new header files, and remove the one that is no longer used:
// ADD HERE:: header files of the components you want to use
#include "SolARImageLoaderOpencv.h"
#include "SolARKeypointDetectorOpencv.h"
#include "SolARCameraOpencv.h"
#include "SolARDescriptorsExtractorAKAZE2Opencv.h"
#include "SolARDescriptorMatcherKNNOpencv.h"
#include "SolARKeypointsReIndexer.h"
#include "SolARSideBySideOverlayOpencv.h"
#include "SolARImageViewerOpencv.h"
As you are using the module Tools, do not forget to add the corresponding namespace.
using namespace SolAR::MODULES::TOOLS;
Then, add the declaration and the instantiation of the new components to your code.
| The two components used for the keypoint detection and the descriptor extraction need to be instantiated only once and can be reused for both the reference image and the current image of the video stream. |
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARImageLoaderOpencv> myImageLoader = xpcf::utils::make_shared<SolARImageLoaderOpencv>();
SRef<SolARKeypointDetectorOpencv> myKeypointsDetector = xpcf::utils::make_shared<SolARKeypointDetectorOpencv>();
SRef<SolARCameraOpencv> myCamera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARDescriptorsExtractorAKAZE2Opencv> myDescriptorExtractor = xpcf::utils::make_shared<SolARDescriptorsExtractorAKAZE2Opencv>();
SRef<SolARDescriptorMatcherKNNOpencv> myDescriptorMatcher = xpcf::utils::make_shared<SolARDescriptorMatcherKNNOpencv>();
SRef<SolARKeypointsReIndexer> myKeypointsReIndexer = xpcf::utils::make_shared<SolARKeypointsReIndexer>();
SRef<SolARSideBySideOverlayOpencv> mySBSOverlay = xpcf::utils::make_shared<SolARSideBySideOverlayOpencv>();
SRef<SolARImageViewerOpencv> myImageViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
Then, if you have a look at the matching pipeline, you will see that you need to add several data structures to transmit information between components. You need to declare ten of them.
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
SRef<datastructure::Image> myRefImage;
std::vector<SRef<datastructure::Keypoint>> myRefKeypoints;
SRef<datastructure::DescriptorBuffer> myRefDescriptors;
SRef<datastructure::Image> myCamImage;
std::vector<SRef<datastructure::Keypoint>> myCamKeypoints;
SRef<datastructure::DescriptorBuffer> myCamDescriptors;
std::vector<datastructure::DescriptorMatch> myMatches;
std::vector<SRef<datastructure::Point2Df>> myMatchedRefKeypoints;
std::vector<SRef<datastructure::Point2Df>> myMatchedCamKeypoints;
SRef<datastructure::Image> mySideBySideImage;
Now, you are ready to code the pipeline. You can start with the initialisation, which consists in loading the reference image, detecting its keypoints, extracting the corresponding descriptors, and starting a camera whose id is passed as a parameter of the program (for this tutorial, you will not need the intrinsic parameters).
// ADD HERE: Find a way to load an image
if (myImageLoader->loadImage(argv[1], myRefImage) != SolAR::FrameworkReturnCode::_SUCCESS)
{
LOG_INFO("load image KO");
return;
}
else
{
LOG_INFO("load image OK");
}
// ADD HERE : Find a way to detect keypoints
myKeypointsDetector->setType(api::features::KeypointDetectorType::AKAZE2);
myKeypointsDetector->detect(myRefImage, myRefKeypoints);
// ADD HERE : Find a way to extract descriptors
myDescriptorExtractor->extract(myRefImage,myRefKeypoints, myRefDescriptors);
// ADD HERE : Launch the camera
if (myCamera->start(atoi(argv[2])) != FrameworkReturnCode::_SUCCESS) // Camera
{
LOG_ERROR("Camera with id {} does not exist", argv[2]);
return;
}
Great! Now you are ready to code the loop of your pipeline. It consists in getting the last image captured by the camera, detecting its keypoints, extracting its descriptors, matching them with the descriptors of the reference image, reindexing the keypoints that match together, drawing a line in a side-by-side overlay for each match, and displaying the resulting image in a viewer, restarting this loop until the user presses the escape key.
// The escape key to exit the sample
char escape_key = 27;
bool process = true;
// The pipeline loop
while (true)
{
// Get the last image returned by the camera
if (myCamera->getNextImage(myCamImage) == SolAR::FrameworkReturnCode::_ERROR_)
{
process = false;
LOG_ERROR("Cannot get access to the image returned by the camera");
break;
}
// ADD HERE : Find a way to detect keypoints in the current image captured by the camera
myKeypointsDetector->detect(myCamImage, myCamKeypoints);
// ADD HERE : Find a way to extract descriptors for current image captured by the camera
myDescriptorExtractor->extract(myCamImage, myCamKeypoints, myCamDescriptors);
// ADD HERE : Find a way to match keypoints
myDescriptorMatcher->match(myRefDescriptors, myCamDescriptors, myMatches);
// ADD HERE : Find a way to reindex matched keypoints
myKeypointsReIndexer->reindex(myRefKeypoints, myCamKeypoints, myMatches, myMatchedRefKeypoints, myMatchedCamKeypoints);
// ADD HERE : Display on a side by side image the matches
mySBSOverlay->drawMatchesLines(myRefImage, myCamImage, mySideBySideImage, myMatchedRefKeypoints, myMatchedCamKeypoints);
if (myImageViewer->display("Matches", mySideBySideImage, &escape_key) == FrameworkReturnCode::_STOP)
break;
}
You can test this matching pipeline by printing the same image as for the keypoint detection tutorial.
Save it in your project folder and do not forget to set the path to the reference image as well as the camera id (generally 0) as parameters of your program (in Qt Creator, click on Projects, Run, and set them in Command line arguments).
You can click on run. A window should appear with a side by side overlay showing the reference image and the video stream of your camera.
If you present the printed reference image in front of your camera, green lines representing matches should appear in this window.
| You can see that the result is not as smooth as state-of-the-art solutions. That is normal: this tutorial is a first step that does not include optimizations such as multi-threading, GPU optimizations, or implementation improvements. Do not worry, these optimizations are planned in the SolAR roadmap and will come very soon. |
The full source code of this tutorial follows:
// ADD HERE: header files of the components you want to use
#include "SolARImageLoaderOpencv.h"
#include "SolARKeypointDetectorOpencv.h"
#include "SolARCameraOpencv.h"
#include "SolARDescriptorsExtractorAKAZE2Opencv.h"
#include "SolARDescriptorMatcherKNNOpencv.h"
#include "SolARKeypointsReIndexer.h"
#include "SolARSideBySideOverlayOpencv.h"
#include "SolARImageViewerOpencv.h"
using namespace SolAR;
using namespace SolAR::MODULES::OPENCV;
using namespace SolAR::MODULES::TOOLS;
namespace xpcf = org::bcom::xpcf;
void main(int argc,char** argv){
// To redirect log to the console
LOG_ADD_LOG_TO_CONSOLE();
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARImageLoaderOpencv> myImageLoader = xpcf::utils::make_shared<SolARImageLoaderOpencv>();
SRef<SolARKeypointDetectorOpencv> myKeypointsDetector = xpcf::utils::make_shared<SolARKeypointDetectorOpencv>();
SRef<SolARCameraOpencv> myCamera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARDescriptorsExtractorAKAZE2Opencv> myDescriptorExtractor = xpcf::utils::make_shared<SolARDescriptorsExtractorAKAZE2Opencv>();
SRef<SolARDescriptorMatcherKNNOpencv> myDescriptorMatcher = xpcf::utils::make_shared<SolARDescriptorMatcherKNNOpencv>();
SRef<SolARKeypointsReIndexer> myKeypointsReIndexer = xpcf::utils::make_shared<SolARKeypointsReIndexer>();
SRef<SolARSideBySideOverlayOpencv> mySBSOverlay = xpcf::utils::make_shared<SolARSideBySideOverlayOpencv>();
SRef<SolARImageViewerOpencv> myImageViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
SRef<datastructure::Image> myRefImage;
std::vector<SRef<datastructure::Keypoint>> myRefKeypoints;
SRef<datastructure::DescriptorBuffer> myRefDescriptors;
SRef<datastructure::Image> myCamImage;
std::vector<SRef<datastructure::Keypoint>> myCamKeypoints;
SRef<datastructure::DescriptorBuffer> myCamDescriptors;
std::vector<datastructure::DescriptorMatch> myMatches;
std::vector<SRef<datastructure::Point2Df>> myMatchedRefKeypoints;
std::vector<SRef<datastructure::Point2Df>> myMatchedCamKeypoints;
SRef<datastructure::Image> mySideBySideImage;
// ADD HERE: Find a way to load an image
if (myImageLoader->loadImage(argv[1], myRefImage) != SolAR::FrameworkReturnCode::_SUCCESS)
{
LOG_ERROR("Cannot load image {}", argv[1]);
return;
}
LOG_INFO("Image loaded");
// ADD HERE : Find a way to detect keypoints
myKeypointsDetector->setType(api::features::KeypointDetectorType::AKAZE2);
myKeypointsDetector->detect(myRefImage, myRefKeypoints);
// ADD HERE : Find a way to extract descriptors
myDescriptorExtractor->extract(myRefImage,myRefKeypoints, myRefDescriptors);
// ADD HERE : Launch the camera
if (myCamera->start(atoi(argv[2])) != FrameworkReturnCode::_SUCCESS) // Camera
{
LOG_ERROR("Camera with id {} does not exist", argv[2]);
return;
}
// The escape key to exit the sample
char escape_key = 27;
bool process = true;
// The pipeline loop
while (true)
{
// Get the last image returned by the camera
if (myCamera->getNextImage(myCamImage) == SolAR::FrameworkReturnCode::_ERROR_)
{
process = false;
LOG_ERROR("Cannot get access to the image returned by the camera");
break;
}
// ADD HERE : Find a way to detect keypoints in the current image captured by the camera
myKeypointsDetector->detect(myCamImage, myCamKeypoints);
// ADD HERE : Find a way to extract descriptors for current image captured by the camera
myDescriptorExtractor->extract(myCamImage, myCamKeypoints, myCamDescriptors);
// ADD HERE : Find a way to match keypoints
myDescriptorMatcher->match(myRefDescriptors, myCamDescriptors, myMatches);
// ADD HERE : Find a way to reindex matched keypoints
myKeypointsReIndexer->reindex(myRefKeypoints, myCamKeypoints, myMatches, myMatchedRefKeypoints, myMatchedCamKeypoints);
// ADD HERE : Display on a side by side image the matches
mySBSOverlay->drawMatchesLines(myRefImage, myCamImage, mySideBySideImage, myMatchedRefKeypoints, myMatchedCamKeypoints);
if (myImageViewer->display("Matches", mySideBySideImage, &escape_key) == FrameworkReturnCode::_STOP)
break;
}
}
SolAR provides a program based on OpenCV that can be used to calibrate your camera device. This program is available in the SolARModuleOpenCV repository. It conforms to the SolAR paradigm, and both Qt and Visual Studio projects are provided. A chessboard image (chessboard.png) and an input configuration file are also provided.
This tutorial is a simplified version of the one provided by OpenCV; feel free to visit the OpenCV website for details on the actual implementation. It uses a ready-to-run executable.
Download the zip file with the CameraCalibration executable
Unzip the downloaded file to directory of your choice.
Open a command prompt in that directory
Plug your camera device
Run the program: MyCameraCalibration + [Enter]
By default, the camera id is supposed to be 0, which is correct most of the time. Yet you may have to change it to make things work, e.g. on laptops where 0 is usually assigned to the built-in camera device while you want to calibrate an external USB camera. In that case, set the camera id to 1 by running: MyCameraCalibration 1 + [Enter]
If the input parameters are correct, you will get something like:
Press the 'g' key to start the process.
A number of positive detections (default = 10) will be taken, with a minimum period of time between two detections (default = 2 seconds). These will be further explained in the last paragraph: input/output files. A positive detection occurs when the chessboard is correctly identified. This is illustrated with a frozen picture displaying corners and lines:
Please, notice the bottom-right counter that indicates the number of positive detections so far.
When all the positive detections are obtained, the calibration is performed and the process is completed, then a bottom-right message indicates "Calibrated" :
Press [Esc] to close the camera view and to exit the program.
An output calibration file has been generated :
This file can be used directly in SolARModuleOpenCV samples and demo code whenever a calibration file is required.
The input/output file names are hard coded in the code.
The input file is calibration_config.yml. An example is given and described below:
calibration_config.yml content:
# the number of inner corners on board width
chessboard_width: 9
# the number of inner corners on board height
chessboard_height: 6
# square size in some user-defined unit
square_size: 0.026
# fix aspect ratio (fx/fy)
apsect_ration: 1
# number of frames used to calibrate the camera: 10 is advised for a high calibration quality, you can use fewer if you are less demanding
nb_frames: 10
# OpenCV flags for camera calibration
flags: 0
# delay between two frames in milliseconds: 2000 gives you enough time to move your camera and focus on the chessboard
delay: 2000
The output file is camera_calibration.yml. It contains the result of the calibration and the format is described below :
This program generates a file camera_calibration.yml.
Check the file date, to be sure that it has been generated when you run the SolARCameraCalibration.
The data in this file define the calibration parameters of your camera and will help with computer vision tasks, especially pose estimation.
calibration_time: "Wed Dec 6 14:02:31 2017"
image_width: 640
image_height: 480
board_width: 9
board_height: 6
square_size: 2.6000000536441803e-02
flags: 0
camera_matrix: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 6.2358844756875726e+02, 0., 3.1296501379528701e+02, 0.,
6.2510924611650637e+02, 2.6595453191051286e+02, 0., 0., 1. ]
distortion_coefficients: !!opencv-matrix
rows: 5
cols: 1
dt: d
data: [ 5.0406145631272294e-03, -7.3194070034412229e-01,
8.8401137738982200e-03, -4.1912068994392751e-03,
2.7609935737342024e+00 ]
This calibration requires that the chessboard be detected (positive detections) with, as far as possible, different poses of the camera, as illustrated in the following video.
This tutorial will help you write a program that estimates the pose using a natural image marker, in other words an image printed on a planar surface. The result consists in displaying, over the video captured by your camera, a wired cube positioned on your image marker.
The tutorial will take approximately 30 minutes to complete.
A PC configured with the SolAR framework (see setup.)
A PC configured with a C++ IDE (please note QT creator is recommended) (see setup.).
You have followed the tutorial on how to match keypoints in an image.
You have followed the tutorial on how to calibrate your camera.
Load a 2D natural image marker.
Compute a homography.
Compute a Perspective-n-Points.
Display a 3D wired cube over your video captured by your camera.
Estimating the pose from a natural image marker can be implemented with various approaches. In the pipeline presented next, we first compute a homography between the reference marker image and the marker seen in the video stream (faster than estimating the pose directly), then we estimate the position of the corners of the marker in the current image, and finally we apply a Perspective-n-Point algorithm to the four corners to estimate the pose of the camera. This pipeline consists of 9 OpenCV components and 5 components available in the module Tools. In red, the components used in the previous tutorial:
SolARMarker2DNaturalImageOpencv (interface SolAR::api::input::files::IMarker2DNaturalImage) will load a file describing your marker. This file defines the url of the image used for your marker as well as the size of your marker in a user-defined unit (meter, centimeter, etc.). The unit used for your marker must be the same throughout the entire pipeline (from your camera calibration to the unit defining the virtual objects you will display in augmentation). More details concerning the format of this file are given next.
SolARKeypointsDetectorOpencv (interface SolAR::api::features::IKeypointDetector) will detect the keypoints in your image and put them in a vector (keep the one used in the previous tutorial).
SolARDescriptorsExtractorAKAZE2Opencv (interface SolAR::api::features::IDescriptorsExtractor) will extract a descriptor for each detected keypoint. Here, we are using the AKAZE descriptor. Note that the nth descriptor of the output buffer corresponds to the nth input keypoint. This component outputs a buffer of descriptors. To improve performance, this buffer can directly point to third-party structures without the need to copy them.
SolARCameraOpencv (interface SolAR::api::input::devices::ICamera) will provide the video stream captured from a camera through a set of successive images. To start the camera, use the start method with the id of the camera (generally the id 0). Finally, you can access the last image captured by the camera by calling the getNextImage method (keep the SolARCameraOpencv used in the previous tutorial).
SolARKeypointDetectorOpencv (interface SolAR::api::features::IKeypointDetector) will detect the keypoints in the current image of the camera stream (keep the one used in the previous tutorial).
SolARDescriptorsExtractorAKAZE2Opencv (interface SolAR::api::features::IDescriptorsExtractor) will extract a descriptor for each keypoint detected in the current image captured by the camera.
SolARDescriptorsMatcherKNNOpencv (interface SolAR::api::features::IDescriptorMatcher) will match the descriptors of keypoints detected in the reference image with the descriptors of keypoints detected in the current image of the camera stream. This component outputs a vector of matches, a match being a pair of indices. The first index of the pair corresponds to the index of the descriptor (resp. keypoint) in the descriptors buffer (resp. the vector of keypoints) of the reference image; the second index corresponds to the index of the descriptor (resp. keypoint) in the descriptors buffer (resp. the vector of keypoints) of the current image of the camera stream. Again, keep the SolARDescriptorsMatcherKNNOpencv used in the previous tutorial.
SolARKeypointsReindexer (interface SolAR::api::features::IKeypointsReIndexer) will output two vectors of 2D points where the nth 2D point of the first vector matches the nth 2D point of the second vector. To do that, this component uses the two vectors of keypoints extracted from the two images as well as the vector of matches. You will find this component in the module called Tools. Keep the SolARKeypointsReIndexer used in the previous tutorial.
SolARHomographyEstimationOpencv (interface SolAR::api::solver::pose::IHomographyEstimation) will compute the homography between the two images. It takes as input parameters the two vectors of 2D points corresponding to the keypoints extracted from these two images that match. The result is a Transform2D corresponding to the 2D transformation from the reference image (here the marker image) to the target image (here the image captured by the camera).
SolAR2DTransform (interface SolAR::api::geom::I2DTransform) will transform a vector of 2D points according to the Transform2D passed in input. In this tutorial, we apply the homography computed in the previous step to a set of four points corresponding to the four corners (in pixel unit) of the natural image marker.
SolARHomographyValidation (interface SolAR::api::solver::pose::IHomographyValidation) will estimate if a homography is valid or not. This component takes two sets of points in input, the first one that corresponds to the reference points, and the second one that corresponds to their homography transformations. The component outputs a boolean estimating if the homography is correct or not.
SolARImage2WorldMapper4Marker2D (interface SolAR::api::solver::pose::IImage2WorldMapper) will compute the 3D position of the four corners of the marker in the 3D world coordinate system (worldPoints). To do that, we have to set as parameters the size of the image (digitalSize in pixels) as well as the size of the marker (worldSize in world unit defined by the user) to apply a cross-multiplication to the 4 corners of the marker in the image space (digitalPoints). Do it only if the homography has been validated in the previous step.
SolARPoseEstimationOpencv (interface SolAR::api::solver::pose::IPoseEstimation) will apply a P4P (Perspective 4-Points) algorithm on the four corners of the marker to estimate the pose of the camera. This algorithm consists in solving the non-linear system that defines the pose of the camera knowing the position of 4 points in the real space as well as their projections in the image plane of the camera. For this reason, this component needs the exact calibration parameters of the camera (intrinsic including distortion).
And as you want to check if the pose of the camera is correct, you will need to add the two following components:
SolAR3DOverlayOpencv (interface SolAR::api::display::I3DOverlay) will display a box over the current image captured by the camera, viewed from the viewpoint corresponding to the pose you pass as a parameter. To do that, you will need to use the drawBox method. Note that the reference frame of the box is located at the center of its lower face.
SolARImageViewerOpencv (interface SolAR::api::display::IImageViewer) will display in a window the current image with a virtual box in overlay.
For this tutorial, you can copy and paste in your main.cpp the following code mainly based on the previous tutorial, namely Match Image with your video stream:
// ADD HERE: header files of the components you want to use
#include "SolARKeypointDetectorOpencv.h"
#include "SolARDescriptorsExtractorAKAZE2Opencv.h"
#include "SolARCameraOpencv.h"
#include "SolARDescriptorMatcherKNNOpencv.h"
#include "SolARKeypointsReIndexer.h"
#include "SolAR2DOverlayOpencv.h"
#include "SolARImageViewerOpencv.h"
using namespace SolAR;
using namespace SolAR::MODULES::OPENCV;
using namespace SolAR::MODULES::TOOLS;
namespace xpcf = org::bcom::xpcf;
void main(int argc,char** argv){
// To redirect log to the console
LOG_ADD_LOG_TO_CONSOLE();
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARKeypointDetectorOpencv> myKeypointsDetector = xpcf::utils::make_shared<SolARKeypointDetectorOpencv>();
SRef<SolARDescriptorsExtractorAKAZE2Opencv> myDescriptorsExtractor = xpcf::utils::make_shared<SolARDescriptorsExtractorAKAZE2Opencv>();
SRef<SolARCameraOpencv> myCamera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARDescriptorMatcherKNNOpencv> myDescriptorMatcher = xpcf::utils::make_shared<SolARDescriptorMatcherKNNOpencv>();
SRef<SolARKeypointsReIndexer> myKeypointsReIndexer = xpcf::utils::make_shared<SolARKeypointsReIndexer>();
SRef<SolAR2DOverlayOpencv> my2DOverlay = xpcf::utils::make_shared<SolAR2DOverlayOpencv>();
SRef<SolARImageViewerOpencv> myViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
SRef<datastructure::Image> myMarkerImage;
std::vector<SRef<datastructure::Keypoint>> myMarkerKeypoints;
SRef<datastructure::DescriptorBuffer> myMarkerDescriptors;
SRef<datastructure::Image> myCamImage;
std::vector<SRef<datastructure::Keypoint>> myCamKeypoints;
SRef<datastructure::DescriptorBuffer> myCamDescriptors;
std::vector<datastructure::DescriptorMatch> myMatches;
std::vector<SRef<datastructure::Point2Df>> myMatchedMarkerKeypoints;
std::vector<SRef<datastructure::Point2Df>> myMatchedCamKeypoints;
// ADD HERE: Find a way to load marker
// ADD HERE : Find a way to get the image from the marker
// ADD HERE : Find a way to detect keypoints from this marker image
myKeypointsDetector->setType(api::features::KeypointDetectorType::AKAZE2);
myKeypointsDetector->detect(myMarkerImage, myMarkerKeypoints);
// ADD HERE : Find a way to extract descriptors from this marker image
myDescriptorsExtractor->extract(myMarkerImage,myMarkerKeypoints, myMarkerDescriptors);
// ADD HERE : Launch the camera
if (myCamera->start(atoi(argv[3])) != FrameworkReturnCode::_SUCCESS) // Camera
{
LOG_ERROR("Camera with id {} does not exist", argv[3]);
return;
}
// ADD HERE : Load the calibration file of the camera
// ADD HERE : Initialize the image2World mapper
// ADD HERE : Initialize the Pose Estimation component
// ADD HERE : Initialize the Overlay 3D
// ADD HERE : Create the 4 corners of the marker
// The escape key to exit the tutorial
char escape_key = 27;
// color used to draw contours
std::vector<unsigned int> bgr{128, 128, 128};
// The pipeline loop
while (true)
{
// Get the last image returned by the camera
if (myCamera->getNextImage(myCamImage) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot get access to the image returned by the camera");
break;
}
myKeypointsDetector->detect(myCamImage, myCamKeypoints);
myDescriptorsExtractor->extract(myCamImage, myCamKeypoints, myCamDescriptors);
if (myDescriptorMatcher->match(myMarkerDescriptors, myCamDescriptors, myMatches) == api::features::DescriptorMatcher::DESCRIPTORS_MATCHER_OK)
{
myKeypointsReIndexer->reindex(myMarkerKeypoints, myCamKeypoints, myMatches, myMatchedMarkerKeypoints, myMatchedCamKeypoints);
// ADD HERE: Find the homography
// ADD HERE: Validate the Homography
// ADD HERE: Map the marker corners to world coordinates and estimate the pose
// ADD HERE: Display a box over the marker
}
if (myViewer->display("AR Box", myCamImage, &escape_key) == FrameworkReturnCode::_STOP)
break;
}
}
The resulting executable will take the 3 following arguments:
The url of a file describing the fiducial marker you are looking for.
The url of the file defining the calibration of your camera (generate it with the calibration tool).
The Id of your camera.
In this code, you already have 6 of the 13 components required to estimate the pose of your camera:
SolARKeypointsDetectorOpencv
SolARDescriptorsExtractorAKAZE2Opencv (used twice in the pipeline)
SolARCameraOpencv (used twice in the pipeline)
SolARDescriptorsMatcherKNNOpencv
SolARKeypointsReindexer
SolARImageViewerOpencv (to visualize the box in augmented reality)
As you will need a component embedded in the module Tools, if you are using QT Creator, be sure to add this module to packagedependencies.txt to link your program with it, and then run qmake.
xpcf|1.0.0|xpcf|thirdParties|http://repository.b-com.com/
boost|1.64.0|boost|thirdParties|http://repository.b-com.com/
opencv|3.2.0|opencv|thirdParties|http://repository.b-com.com/
spdlog|0.14.0|spdlog|thirdParties|http://repository.b-com.com/amc-generic
eigen|3.3.4|eigen|thirdParties|http://repository.b-com.com/amc-generic
SolARFramework|0.4.0|SolARFramework|bcomBuild|url_repo_artifactory
SolARModuleOpenCV|0.4.0|SolARModuleOpenCV|bcomBuild|url_repo_artifactory
SolARModuleTools|0.4.0|SolARModuleTools|bcomBuild|url_repo_artifactory
Now, in your main program, you can add the new components as described in blue in the pipeline schema shown above.
| This pipeline schema does not include the two components used to display a virtual box over the marker, do not forget to add them (SolAR3DOverlayOpencv and SolARImageViewerOpencv already used in the previous tutorial). |
First, add the new header files, and remove the ones that are no longer used:
// ADD HERE:: header files of the components you want to use
#include "SolARMarker2DNaturalImageOpencv.h"
#include "SolARKeypointDetectorOpencv.h"
#include "SolARDescriptorsExtractorAKAZE2Opencv.h"
#include "SolARCameraOpencv.h"
#include "SolARDescriptorMatcherKNNOpencv.h"
#include "SolARKeypointsReIndexer.h"
#include "SolARHomographyEstimationOpencv.h"
#include "SolAR2DTransform.h"
#include "SolARHomographyValidation.h"
#include "SolARImage2WorldMapper4Marker2D.h"
#include "SolARPoseEstimationOpencv.h"
#include "SolAR3DOverlayOpencv.h"
#include "SolAR2DOverlayOpencv.h"
#include "SolARImageViewerOpencv.h"
As you are using the module Tools, do not forget to add the corresponding namespace if not already done.
using namespace SolAR::MODULES::TOOLS;
Then, add the declaration and the instantiation of the new components to your code.
| The two components used for keypoint detection and descriptor extraction can be instantiated only once and reused for both the marker image and the current image of the video stream. |
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARMarker2DNaturalImageOpencv> myMarker = xpcf::utils::make_shared<SolARMarker2DNaturalImageOpencv>();
SRef<SolARKeypointDetectorOpencv> myKeypointsDetector = xpcf::utils::make_shared<SolARKeypointDetectorOpencv>();
SRef<SolARDescriptorsExtractorAKAZE2Opencv> myDescriptorsExtractor = xpcf::utils::make_shared<SolARDescriptorsExtractorAKAZE2Opencv>();
SRef<SolARCameraOpencv> myCamera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARDescriptorMatcherKNNOpencv> myDescriptorMatcher = xpcf::utils::make_shared<SolARDescriptorMatcherKNNOpencv>();
SRef<SolARKeypointsReIndexer> myKeypointsReIndexer = xpcf::utils::make_shared<SolARKeypointsReIndexer>();
SRef<SolARHomographyEstimationOpencv> myHomographyEstimation = xpcf::utils::make_shared<SolARHomographyEstimationOpencv>();
SRef<SolAR2DTransform> my2DTransformer = xpcf::utils::make_shared<SolAR2DTransform>();
SRef<SolARHomographyValidation> myHomographyValidation = xpcf::utils::make_shared<SolARHomographyValidation>();
SRef<SolARImage2WorldMapper4Marker2D> myImage2WorldMapper = xpcf::utils::make_shared<SolARImage2WorldMapper4Marker2D>();
SRef<SolARPoseEstimationOpencv> myPoseEstimation = xpcf::utils::make_shared<SolARPoseEstimationOpencv>();
SRef<SolAR3DOverlayOpencv> my3DOverlay = xpcf::utils::make_shared<SolAR3DOverlayOpencv>();
SRef<SolAR2DOverlayOpencv> my2DOverlay = xpcf::utils::make_shared<SolAR2DOverlayOpencv>();
SRef<SolARImageViewerOpencv> myViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
Then, if you have a look at the natural marker based pose estimation pipeline, you will see that you need to add several data structures to transmit information between components. Declare the following data structures.
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
SRef<datastructure::Image> myMarkerImage;
std::vector<SRef<datastructure::Keypoint>> myMarkerKeypoints;
SRef<datastructure::DescriptorBuffer> myMarkerDescriptors;
SRef<datastructure::Image> myCamImage;
std::vector<SRef<datastructure::Keypoint>> myCamKeypoints;
SRef<datastructure::DescriptorBuffer> myCamDescriptors;
std::vector<datastructure::DescriptorMatch> myMatches;
std::vector<SRef<datastructure::Point2Df>> myMatchedMarkerKeypoints;
std::vector<SRef<datastructure::Point2Df>> myMatchedCamKeypoints;
datastructure::Transform2Df myHomography;
std::vector<SRef<datastructure::Point2Df>> myCorners;
std::vector<SRef<datastructure::Point2Df>> myTransformedCorners;
std::vector<SRef<datastructure::Point3Df>> myWorldCorners;
datastructure::Pose myCamPose;
Now, you are ready to code the pipeline. You can start with the initialisation, which consists in:
loading the marker,
detecting its keypoints,
extracting the corresponding descriptors,
starting the camera whose id is passed as a parameter of the program,
setting the configuration of components that have parameters (SolARImage2WorldMapper4Marker2D, SolARPoseEstimationOpencv and SolAR3DOverlayOpencv),
creating the four corners of the marker (in pixels).
// ADD HERE: Find a way to load marker
if (myMarker->loadMarker(argv[1]) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot load marker");
return;
}
else
{
LOG_INFO("Marker loaded");
}
// ADD HERE : Find a way to get the image from the marker
if (myMarker->getImage(myMarkerImage) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot access marker image");
return;
}
// ADD HERE : Find a way to detect keypoints from this marker image
myKeypointsDetector->setType(api::features::KeypointDetectorType::AKAZE2);
myKeypointsDetector->detect(myMarkerImage, myMarkerKeypoints);
// ADD HERE : Find a way to extract descriptors from this marker image
myDescriptorsExtractor->extract(myMarkerImage,myMarkerKeypoints, myMarkerDescriptors);
// ADD HERE : Launch the camera
if (myCamera->start(atoi(argv[3])) != FrameworkReturnCode::_SUCCESS) // Camera
{
LOG_ERROR("Camera with id {} does not exist", argv[3]);
return;
}
// ADD HERE : Load the calibration file of the camera
if (myCamera->loadCameraParameters(argv[2]) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_INFO("Cannot load camera calibration file");
}
// ADD HERE : Initialize the image2World mapper
myImage2WorldMapper->setParameters(myMarkerImage->getSize(), myMarker->getSize());
// ADD HERE : Initialize the Pose Estimation component
myPoseEstimation->setCameraParameters(myCamera->getIntrinsicsParameters(), myCamera->getDistorsionParameters());
// ADD HERE : Initialize the Overlay 3D
my3DOverlay->setCameraParameters(myCamera->getIntrinsicsParameters(), myCamera->getDistorsionParameters());
// ADD HERE : Create the 4 corners of the marker
myCorners.push_back(xpcf::utils::make_shared<Point2Df>(0,0));
myCorners.push_back(xpcf::utils::make_shared<Point2Df>((float)myMarkerImage->getWidth(),0));
myCorners.push_back(xpcf::utils::make_shared<Point2Df>((float)myMarkerImage->getWidth(),(float)myMarkerImage->getHeight()));
myCorners.push_back(xpcf::utils::make_shared<Point2Df>(0,(float)myMarkerImage->getHeight()));
Great! Now you are ready to code the loop of your pipeline. It consists in:
getting the last image captured by the camera,
detecting its keypoints,
extracting its descriptors,
matching them with the descriptors of the reference image,
reindexing the keypoints that match together,
estimating a homography between the marker image and the image captured by the camera,
defining the four corners of the marker according to its size in pixels,
transforming the corners of the marker with the estimated homography,
validating the homography,
mapping the corners of the marker to obtain their 3D positions relative to the world coordinate system,
estimating the pose of the camera,
overlaying a virtual 3D box on the image captured by the camera,
displaying the resulting image in a viewer,
restarting this loop until the user presses the escape key.
| In this solution, we first estimate a homography and then apply a PnP to the four corners of the marker. Why not use the homography directly to estimate the pose? Because the decomposition of a homography provides 4 candidates for the pose, of which only 2 can be easily rejected, leaving an uncertainty on the last 2 candidates. If you want to go deeper on this topic, do not hesitate to read the study of Ezio Malis and Manuel Vargas available here. |
| There are other ways to estimate the pose of a camera based on a natural image marker. Some of them estimate the pose by applying a Perspective-n-Point algorithm (here implemented in SolARPoseEstimationOpencv) to all keypoints matching between the marker image and the camera image. Nevertheless, applying a PnP algorithm to hundreds or even thousands of matches consumes so many resources that it is not real time. |
// The escape key to exit the tutorial
char escape_key = 27;
// color used to draw contours
std::vector<unsigned int> bgr{128, 128, 128};
// The pipeline loop
while (true)
{
// Get the last image returned by the camera
if (myCamera->getNextImage(myCamImage) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot get access to the image returned by the camera");
break;
}
myKeypointsDetector->detect(myCamImage, myCamKeypoints);
myDescriptorsExtractor->extract(myCamImage, myCamKeypoints, myCamDescriptors);
if (myDescriptorMatcher->match(myMarkerDescriptors, myCamDescriptors, myMatches) == api::features::DescriptorMatcher::DESCRIPTORS_MATCHER_OK)
{
myKeypointsReIndexer->reindex(myMarkerKeypoints, myCamKeypoints, myMatches, myMatchedMarkerKeypoints, myMatchedCamKeypoints);
if (myHomographyEstimation->findHomography(myMatchedMarkerKeypoints, myMatchedCamKeypoints, myHomography) == api::solver::pose::HomographyEstimation::HOMOGRAPHY_ESTIMATION_OK)
{
my2DTransformer->transform(myCorners, myHomography, myTransformedCorners);
if (myHomographyValidation->isValid(myCorners, myTransformedCorners))
{
myImage2WorldMapper->map(myCorners, myWorldCorners);
myPoseEstimation->poseFromSolvePNP(myCamPose, myTransformedCorners, myWorldCorners);
my3DOverlay->drawBox(myCamPose, myMarker->getWidth(),myMarker->getHeight(),0.2, Transform3Df::Identity(), myCamImage);
}
}
}
if (myViewer->display("AR Box", myCamImage, &escape_key) == FrameworkReturnCode::_STOP)
break;
}
Please choose a natural image that will be used for your test, for example the same image as for the tutorial on how to detect keypoints in an image, and print it. When the image is printed, please measure its size.
You now have to create your YAML marker file defining the size of your real marker as well as the url of your image you will use:
%YAML:1.0
---
MarkerWidth: 0.285
MarkerHeight: 0.197
ImagePath: './graf1.png'
| Please note the size is in a user-defined unit; it is preferable to use the same unit throughout the whole pipeline environment (calibration, etc.). |
Now, you can set the parameters of your executable with the url of your marker file, the url of your camera calibration parameters, and the id of your camera (generally 0). Example:
NaturalImageMarker.yml camera_calibration.yml 0
To set these parameters in QT creator, click on Projects, Run, and set the previous parameters in Command line arguments, and do not forget to set the working directory to the folder where you put your calibration and marker files.
| The camera calibration file can be obtained by following the tutorial on how to calibrate your camera. |
You can now click on run. A window should appear showing the video captured live by your camera. If you put your natural image marker in front of the camera, you should see a virtual box displayed over it. To exit the application, just press the escape key.
| You may notice that the result is not as smooth as state-of-the-art solutions. That is normal: this tutorial is a first step that does not include optimizations such as multi-threading, GPU acceleration, or implementation improvements. Do not worry, these optimizations are planned in the SolAR roadmap and will come soon. |
Below is the full source code of this tutorial:
// ADD HERE:: header files of the components you want to use
#include "SolARMarker2DNaturalImageOpencv.h"
#include "SolARKeypointDetectorOpencv.h"
#include "SolARDescriptorsExtractorAKAZE2Opencv.h"
#include "SolARCameraOpencv.h"
#include "SolARDescriptorMatcherKNNOpencv.h"
#include "SolARKeypointsReIndexer.h"
#include "SolARHomographyEstimationOpencv.h"
#include "SolAR2DTransform.h"
#include "SolARHomographyValidation.h"
#include "SolARImage2WorldMapper4Marker2D.h"
#include "SolARPoseEstimationOpencv.h"
#include "SolAR3DOverlayOpencv.h"
#include "SolARImageViewerOpencv.h"
using namespace SolAR;
using namespace SolAR::MODULES::OPENCV;
using namespace SolAR::MODULES::TOOLS;
namespace xpcf = org::bcom::xpcf;
void main(int argc,char** argv){
// To redirect log to the console
LOG_ADD_LOG_TO_CONSOLE();
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARMarker2DNaturalImageOpencv> myMarker = xpcf::utils::make_shared<SolARMarker2DNaturalImageOpencv>();
SRef<SolARKeypointDetectorOpencv> myKeypointsDetector = xpcf::utils::make_shared<SolARKeypointDetectorOpencv>();
SRef<SolARDescriptorsExtractorAKAZE2Opencv> myDescriptorsExtractor = xpcf::utils::make_shared<SolARDescriptorsExtractorAKAZE2Opencv>();
SRef<SolARCameraOpencv> myCamera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARDescriptorMatcherKNNOpencv> myDescriptorMatcher = xpcf::utils::make_shared<SolARDescriptorMatcherKNNOpencv>();
SRef<SolARKeypointsReIndexer> myKeypointsReIndexer = xpcf::utils::make_shared<SolARKeypointsReIndexer>();
SRef<SolARHomographyEstimationOpencv> myHomographyEstimation = xpcf::utils::make_shared<SolARHomographyEstimationOpencv>();
SRef<SolAR2DTransform> my2DTransformer = xpcf::utils::make_shared<SolAR2DTransform>();
SRef<SolARHomographyValidation> myHomographyValidation = xpcf::utils::make_shared<SolARHomographyValidation>();
SRef<SolARImage2WorldMapper4Marker2D> myImage2WorldMapper = xpcf::utils::make_shared<SolARImage2WorldMapper4Marker2D>();
SRef<SolARPoseEstimationOpencv> myPoseEstimation = xpcf::utils::make_shared<SolARPoseEstimationOpencv>();
SRef<SolAR3DOverlayOpencv> my3DOverlay = xpcf::utils::make_shared<SolAR3DOverlayOpencv>();
SRef<SolARImageViewerOpencv> myViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
SRef<datastructure::Image> myMarkerImage;
std::vector<SRef<datastructure::Keypoint>> myMarkerKeypoints;
SRef<datastructure::DescriptorBuffer> myMarkerDescriptors;
SRef<datastructure::Image> myCamImage;
std::vector<SRef<datastructure::Keypoint>> myCamKeypoints;
SRef<datastructure::DescriptorBuffer> myCamDescriptors;
std::vector<datastructure::DescriptorMatch> myMatches;
std::vector<SRef<datastructure::Point2Df>> myMatchedMarkerKeypoints;
std::vector<SRef<datastructure::Point2Df>> myMatchedCamKeypoints;
datastructure::Transform2Df myHomography;
std::vector<SRef<datastructure::Point2Df>> myCorners;
std::vector<SRef<datastructure::Point2Df>> myTransformedCorners;
std::vector<SRef<datastructure::Point3Df>> myWorldCorners;
datastructure::Pose myCamPose;
// ADD HERE: Find a way to load marker
if (myMarker->loadMarker(argv[1]) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot load marker");
return;
}
else
{
LOG_INFO("Marker loaded");
}
// ADD HERE : Find a way to get the image from the marker
if (myMarker->getImage(myMarkerImage) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot access marker image");
return;
}
// ADD HERE : Find a way to detect keypoints from this marker image
myKeypointsDetector->setType(api::features::KeypointDetectorType::AKAZE2);
myKeypointsDetector->detect(myMarkerImage, myMarkerKeypoints);
// ADD HERE : Find a way to extract descriptors from this marker image
myDescriptorsExtractor->extract(myMarkerImage,myMarkerKeypoints, myMarkerDescriptors);
// ADD HERE : Launch the camera
if (myCamera->start(atoi(argv[3])) != FrameworkReturnCode::_SUCCESS) // Camera
{
LOG_ERROR("Camera with id {} does not exist", argv[3]);
return;
}
// ADD HERE : Load the calibration file of the camera
if (myCamera->loadCameraParameters(argv[2]) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot load camera calibration file");
}
// ADD HERE : Initialize the image2World mapper
myImage2WorldMapper->setParameters(myMarkerImage->getSize(), myMarker->getSize());
// ADD HERE : Initialize the Pose Estimation component
myPoseEstimation->setCameraParameters(myCamera->getIntrinsicsParameters(), myCamera->getDistorsionParameters());
// ADD HERE : Initialize the Overlay 3D
my3DOverlay->setCameraParameters(myCamera->getIntrinsicsParameters(), myCamera->getDistorsionParameters());
// ADD HERE : Create the 4 corners of the marker
myCorners.push_back(xpcf::utils::make_shared<Point2Df>(0,0));
myCorners.push_back(xpcf::utils::make_shared<Point2Df>((float)myMarkerImage->getWidth(),0));
myCorners.push_back(xpcf::utils::make_shared<Point2Df>((float)myMarkerImage->getWidth(),(float)myMarkerImage->getHeight()));
myCorners.push_back(xpcf::utils::make_shared<Point2Df>(0,(float)myMarkerImage->getHeight()));
// The escape key to exit the tutorial
char escape_key = 27;
// color used to draw contours
std::vector<unsigned int> bgr{128, 128, 128};
// The pipeline loop
while (true)
{
// Get the last image returned by the camera
if (myCamera->getNextImage(myCamImage) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot get access to the image returned by the camera");
break;
}
myKeypointsDetector->detect(myCamImage, myCamKeypoints);
myDescriptorsExtractor->extract(myCamImage, myCamKeypoints, myCamDescriptors);
if (myDescriptorMatcher->match(myMarkerDescriptors, myCamDescriptors, myMatches) == api::features::DescriptorMatcher::DESCRIPTORS_MATCHER_OK)
{
myKeypointsReIndexer->reindex(myMarkerKeypoints, myCamKeypoints, myMatches, myMatchedMarkerKeypoints, myMatchedCamKeypoints);
// ADD HERE: Find the homography
if (myHomographyEstimation->findHomography(myMatchedMarkerKeypoints, myMatchedCamKeypoints, myHomography) == api::solver::pose::HomographyEstimation::HOMOGRAPHY_ESTIMATION_OK)
{
my2DTransformer->transform(myCorners, myHomography, myTransformedCorners);
// ADD HERE: Validate the Homography
if (myHomographyValidation->isValid(myCorners, myTransformedCorners))
{
myImage2WorldMapper->map(myCorners, myWorldCorners);
// ADD HERE: Display a box over the marker
myPoseEstimation->poseFromSolvePNP(myCamPose, myTransformedCorners, myWorldCorners);
// ADD HERE: Display a box over the marker
my3DOverlay->drawBox(myCamPose, myMarker->getWidth(),myMarker->getHeight(),0.2, Transform3Df::Identity(), myCamImage);
}
}
}
if (myViewer->display("AR Box", myCamImage, &escape_key) == FrameworkReturnCode::_STOP)
break;
}
}
This tutorial will walk you through the implementation of one of the first solutions for augmented reality: the fiducial marker. The tutorial will take approximately one hour to complete.
A PC configured with the SolAR framework (see setup).
A PC configured with a C++ IDE (QT Creator is recommended) (see setup).
You have followed the tutorial on how to create your first SolAR Project.
You have followed the tutorial on how to calibrate your camera.
Load a squared binary marker
Apply conversions and filters on images
Detect contours and filter them
Extract squared binary pattern from an image
Compute the pose of the camera from a fiducial marker based on a Perspective-n-Points algorithm.
Fiducial markers are generally black and white 2D patterns that are easily identifiable in an image and that hold all the information required to easily compute the pose of a camera recording them.
They can be square, circular, or defined by a set of binary points, and they are still widely used as they offer a robust way to estimate the pose of a camera for augmented reality applications.
Fiducial marker based approaches are mostly built according to the following steps:
Initialization: to load the fiducial marker and start the camera.
Detection: to find fiducial patterns in the current image captured by the camera.
Recognition: to select among the fiducial patterns detected in the current image the one(s) we are looking for.
Camera pose estimation: to estimate the position and orientation of the camera in the coordinate system of the fiducial marker(s).
In this tutorial, we focus on estimating the pose for one squared binary marker based on a pattern defined by a squared grid of black and white cells surrounded by a black border:
| In our implementation, the number of cells in height and width must be equal, and the thickness of the border must be equal to the thickness of a cell of the pattern. Finally, this pattern must not present a discrete rotational symmetry, meaning that if you rotate your pattern by 90, 180 or 270 degrees, you must not get the same pattern. |
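To make the rotational-symmetry constraint concrete, here is a small self-contained sketch (the names and data layout are ours, not SolAR's) that checks whether a candidate pattern passes this rule:

```cpp
#include <cstddef>
#include <vector>

// A square binary pattern stored row-major: 1 = white cell, 0 = black cell.
using Pattern = std::vector<std::vector<int>>;

// Rotate a square pattern by 90 degrees clockwise: cell (i, j) moves to (j, n-1-i).
Pattern rotate90(const Pattern& p) {
    std::size_t n = p.size();
    Pattern r(n, std::vector<int>(n));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            r[j][n - 1 - i] = p[i][j];
    return r;
}

// The pattern is usable only if none of its 90, 180 or 270 degree
// rotations reproduces the original pattern.
bool hasNoRotationalSymmetry(const Pattern& p) {
    Pattern r = p;
    for (int k = 0; k < 3; ++k) {
        r = rotate90(r);
        if (r == p)
            return false;
    }
    return true;
}
```

A pattern failing this check would be recognized in several orientations at once, making the estimated pose ambiguous.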
We will now explain which components we will use to implement a pose estimation pipeline based on squared fiducial markers. The following schema presents the pipeline we will implement in this tutorial:
This pipeline may seem quite complex, but you will see that its implementation takes only a few minutes thanks to the SolAR framework. Below, we detail each component used during the four steps of our pipeline:
Marker2DSquaredBinary: This component loads a file describing the squared binary marker. It is a yaml file defining the real size of the marker (including borders) in a user-defined unit (centimeter, meter, …) as well as the squared binary pattern, where 1 defines a white cell and 0 a black cell.
An example of file defining a squared binary marker is given in the next section.
DescriptorExtractorSBPatternOpencv: A squared binary pattern is represented by a matrix of booleans, and this component simply concatenates each row of this matrix to create a vector of booleans (DescriptorBuffer) representing the descriptor of the squared binary pattern.
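The row concatenation itself is trivial; as an illustration (our own sketch operating on plain boolean vectors, not the component's actual code, which returns a SolAR DescriptorBuffer):

```cpp
#include <vector>

// Flatten a row-major binary pattern matrix into a single descriptor
// vector by concatenating its rows.
std::vector<bool> flattenPattern(const std::vector<std::vector<bool>>& pattern) {
    std::vector<bool> descriptor;
    for (const auto& row : pattern)
        descriptor.insert(descriptor.end(), row.begin(), row.end());
    return descriptor;
}
```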
Camera: This component loads a file describing the intrinsic parameters of the camera, estimated with a calibration tool you should have used if you have already followed the tutorial on how to calibrate your camera. Then, the component starts the camera (the id of the camera to start, generally 0, is given as an argument), and you can then get the current image by calling the getNextImage() method in a loop.
| Without a good calibration, the pose estimated by this pipeline will certainly be wrong. |
ImageConvertor: This component converts the color image captured by the camera to a grey image.
ImageFilter: This component applies a filter to the image. Here, we apply a binarize filter to obtain a black and white (binary) image. This filter requires a threshold between 0 and 255 to decide whether a grey pixel becomes black or white. If you set this threshold to -1, the threshold is automatically computed using Otsu's method, based on a histogram computed over the whole image.
| This filter is the weak point of this pipeline, as the binarize threshold is global, whereas it should be computed locally, per region of the current image, in order to reduce the impact of local specular reflections on the marker or overexposure. This could be improved with other components (feel free to contribute to the SolAR open-source project by implementing them). |
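To give an idea of what the automatic threshold computation does, here is a minimal sketch of Otsu's method on a 256-bin grey-level histogram (our own illustration, not the module's implementation): it picks the threshold that maximizes the between-class variance of the two resulting pixel populations.

```cpp
#include <array>
#include <cstdint>

// Otsu's method: for each candidate threshold t, split the histogram
// into background [0..t] and foreground [t+1..255] and keep the t that
// maximizes the between-class variance wB * wF * (meanB - meanF)^2.
int otsuThreshold(const std::array<uint32_t, 256>& hist) {
    uint64_t total = 0, sumAll = 0;
    for (int i = 0; i < 256; ++i) {
        total += hist[i];
        sumAll += (uint64_t)i * hist[i];
    }
    uint64_t wB = 0, sumB = 0;
    double bestVar = -1.0;
    int best = 0;
    for (int t = 0; t < 256; ++t) {
        wB += hist[t];                 // background weight
        if (wB == 0) continue;
        uint64_t wF = total - wB;      // foreground weight
        if (wF == 0) break;
        sumB += (uint64_t)t * hist[t];
        double mB = (double)sumB / wB;
        double mF = (double)(sumAll - sumB) / wF;
        double var = (double)wB * wF * (mB - mF) * (mB - mF);
        if (var > bestVar) { bestVar = var; best = t; }
    }
    return best;
}
```

A locally adaptive variant would run this per image region instead of once globally, which is exactly the improvement suggested above.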
ContoursExtractor: This component extracts contours from the previous binary image. As we are looking for the contours of square markers, which contain exactly 4 edges, we set the minimum number of edges of a contour to 4.
ContoursFilterBinaryMarker: This component first filters closed contours and approximates shallow curves, defined by a set of successive edges, by a single edge. Then, it selects only contours with four edges. You can set the minimum size (in pixels) of the contours you want to keep, in order to exclude small quad contours.
PerspectiveController: This component warps and crops the binary image to extract a set of sub-images whose borders are defined by the contours filtered by the previous component.
DescriptorExtractorSBPattern: This component checks whether a sub-image corresponds to a squared binary marker (by detecting if the borders of the sub-image are black). If so, it extracts its squared binary pattern descriptors (by detecting the color of each cell of the pattern, for each of the four rotations of the sub-image).
DescriptorMatcherRadius: This component compares the squared binary pattern of the marker we are looking for against the squared binary patterns extracted from the current image. It does so by computing the Hamming distance between the descriptors.
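The Hamming distance between two pattern descriptors is simply the number of cells on which they disagree, and a radius matcher accepts a candidate only if that distance stays within a given radius. A minimal sketch (hypothetical names, operating on plain boolean vectors rather than SolAR DescriptorBuffers):

```cpp
#include <cstddef>
#include <vector>

// Number of positions where two equal-length binary descriptors disagree.
std::size_t hammingDistance(const std::vector<bool>& a, const std::vector<bool>& b) {
    std::size_t d = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] != b[i])
            ++d;
    return d;
}

// A radius matcher keeps a candidate only if its distance to the
// reference descriptor is within the radius.
bool matchesWithinRadius(const std::vector<bool>& ref,
                         const std::vector<bool>& candidate,
                         std::size_t radius) {
    return hammingDistance(ref, candidate) <= radius;
}
```

A radius of 0 demands an exact pattern match; a small positive radius tolerates a few misread cells.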
SBPatternReindexer: This component creates two vectors of points:
the first one with the 4 corners of the marker (in cells, meaning in pattern space),
the second one with the 4 corners of the marker extracted from the image (in pixels, meaning in image space).
| The size of the pattern is the number of cells, not counting the borders (for example, 5 for a 5x5 pattern). |
Image2WorldMapper4Marker2D: This component computes the 3D position of the 4 corners of the marker in the 3D coordinate system of the real space.
| To do that, we have to set as parameters the size of the pattern (in cells) as well as the size of the marker (in the user-defined world unit), and apply a cross-multiplication to the 4 corners of the marker given by the previous SBPatternReindexer component. |
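The cross-multiplication is a simple proportional mapping; a sketch with hypothetical names (the real component works on SolAR data structures and is configured through setParameters):

```cpp
// Map a marker corner expressed in pattern cells to 3D world
// coordinates. The marker is assumed to lie in the z = 0 plane.
struct Point3f { float x, y, z; };

Point3f mapCornerToWorld(float u, float v,      // corner position in cells
                         int patternSize,       // cells per side, e.g. 6
                         float markerWidth,     // in user-defined world units
                         float markerHeight) {
    return { u / patternSize * markerWidth,
             v / patternSize * markerHeight,
             0.0f };
}
```

For the 6x6 marker of this tutorial with a 0.157-unit width, the corner at cell (6, 6) maps to (0.157, 0.157, 0).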
PoseEstimation: This component applies a PnP (Perspective-n-Points) algorithm on the 4 corners of the marker to estimate the pose of the camera. This algorithm consists in solving the non-linear system that defines the pose of the camera, knowing the positions of 4 points in real space as well as their projections onto the image plane of the camera.
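PnP inverts the pinhole projection: it searches for the rotation and translation that make the projections of the 4 world corners coincide with the 4 corners observed in the image. As a reminder, the forward projection of a 3D point already expressed in the camera frame is (standard pinhole model, distortion ignored for clarity):

```cpp
// Pinhole projection: x = fx * X/Z + cx, y = fy * Y/Z + cy.
struct Pixel { double x, y; };

Pixel project(double X, double Y, double Z,   // 3D point in the camera frame
              double fx, double fy,           // focal lengths (in pixels)
              double cx, double cy) {         // principal point (in pixels)
    return { fx * X / Z + cx, fy * Y / Z + cy };
}
```

The intrinsic parameters fx, fy, cx, cy come from the calibration file loaded by the Camera component; the values used in the test are purely illustrative.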
First, create a new QT Creator project and follow the instructions available in the first tutorial here. As you need both the OpenCV and Tools modules, you will need to add both of them, as well as their third parties, in the packagedependencies.txt file:
xpcf|1.0.0|xpcf|thirdParties|http://repository.b-com.com/
boost|1.64.0|boost|thirdParties|http://repository.b-com.com/
opencv|3.2.0|opencv|thirdParties|http://repository.b-com.com/
spdlog|0.14.0|spdlog|thirdParties|http://repository.b-com.com/amc-generic
eigen|3.3.4|eigen|thirdParties|http://repository.b-com.com/amc-generic
SolARFramework|0.3.0|SolARFramework|bcomBuild|url_repo_artifactory
SolARModuleOpenCV|0.3.0|SolARModuleOpenCV|bcomBuild|url_repo_artifactory
SolARModuleTools|0.3.0|SolARModuleTools|bcomBuild|url_repo_artifactory
Now you can run qmake (right-click on your project in QT Creator, and click on Run qmake). All paths for headers and libraries are now set.
The resulting executable takes the following 3 arguments:
The path to a file describing the fiducial marker you are looking for.
The path to the file defining the calibration of your camera (generate it with the calibration tool).
The id of your camera.
In this tutorial, we will use the following 6x6 marker:
[Image: Fiducial Image Marker (click to print): images/FiducialMarker/fiducialMarker.gif]
You can print it now.
| The size of this marker is 6 by 6. You can use the marker size of your choice as long as it is square (4x4, 5x5, etc.). |
Now, you can create your marker file. To do it, create a .yml file, copy the following lines and save it under fiducialMarker.yml in your project folder:
%YAML:1.0
---
MarkerWidth: 0.157
MarkerHeight: 0.157
Pattern: !!opencv-matrix
rows: 6
cols: 6
dt: u
data: [ 1,0,0,0,1,1,1,0,0,1,1,1,1,1,0,1,0,1,0,0,1,0,0,1,1,0,0,1,1,0,1,1,1,0,0,1 ]
MarkerWidth and MarkerHeight correspond to the real size of your marker in your user-defined unit. In our example, the size is given in meters. So take your ruler and measure the size of your marker (including the black borders).
| The user-defined unit has to be common across your whole pipeline, including camera calibration and the unit defining your augmentations. For example, when you set the size of a cell of a chessboard used for camera calibration, make sure this size is defined in the same user-defined unit as the marker size. |
The Pattern parameter is a matrix describing which cells are white (1) or black (0).
Then, if it is not already done, calibrate your camera by following the tutorial on how to calibrate your camera. Copy the resulting calibration file into your project folder.
Now, in your IDE, you can set the command line arguments of your executable with paths relative to your executable folder. To set these parameters in QT Creator, click on Projects, Run, and set the previous parameters in Command line arguments; do not forget to set the working directory to the folder where you put your calibration and marker files. Your command line should look like:
fiducialMarker.yml camera_calibration.yml 0
Your development environment is now ready. Next, you have to fill in the main.cpp.
To start filling your main.cpp, you can replace its code with the fill-in-the-blank version below:
// ADD HERE:: header files of the components you want to use
using namespace SolAR;
using namespace SolAR::MODULES::OPENCV;
using namespace SolAR::MODULES::TOOLS;
namespace xpcf = org::bcom::xpcf;
void main(int argc,char** argv){
// To redirect log to the console
LOG_ADD_LOG_TO_CONSOLE();
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<SolAR::datastructure::SolARImage> inputImage;
// Initialize your components before starting the pipeline loop
// ADD HERE: Find a way to load your fiducial marker
// ADD HERE : Find a way to extract squared binary pattern descriptors from the reference fiducial marker
// ADD HERE : Launch the camera
// ADD HERE : Load the calibration file of the camera
// ADD HERE : Initialization of the Contours Extractor
// ADD HERE : Initialization of the Contours Filter
// ADD HERE : Initialization of the Perspective Controller
// ADD HERE : Initialization of the Squared Binary Pattern Extractor
// ADD HERE : Initialization of the Pattern Reindexer
// ADD HERE : Initialize the image2World mapper
// ADD HERE : Initialize the Pose Estimation component
// ADD HERE : Initialize the 3D Overlay component
// The escape key to exit the sample
char escape_key = 27;
// The pipeline loop
while (true)
{
// ADD HERE : Get the last image returned by the camera
// ADD HERE : The calls to components to get your camera pose
// ADD HERE : Draw a window with a box displayed over the fiducial marker
}
}
Let’s start by including the header files corresponding to the components described in the previous pipeline schema.
Once done, it should look like this:
// ADD HERE:: header files of the components you want to use
#include "SolARMarker2DSquaredBinaryOpencv.h"
#include "SolARDescriptorsExtractorSBPatternOpencv.h"
#include "SolARCameraOpencv.h"
#include "SolARImageConvertorOpencv.h"
#include "SolARImageFilterOpencv.h"
#include "SolARContoursExtractorOpencv.h"
#include "SolARContoursFilterBinaryMarkerOpencv.h"
#include "SolARPerspectiveControllerOpencv.h"
#include "SolARDescriptorMatcherRadiusOpencv.h"
#include "SolARSBPatternReIndexer.h"
#include "SolARImage2WorldMapper4Marker2D.h"
#include "SolARPoseEstimationOpencv.h"
#include "SolAR3DOverlayOpencv.h"
#include "SolARImageViewerOpencv.h"
Next, you have to declare and instantiate all the components you will need for camera pose estimation based on a fiducial marker. In the main.cpp, a comment shows how to easily declare and instantiate a camera component:
SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
You can do it now for all 12 components of the pipeline described in the pipeline schema.
| You do not have to declare and instantiate twice the DescriptorsExtractorSBPattern component as it will be used both for the reference marker and for the extraction of fiducial patterns in the current image captured by the camera. |
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARMarker2DSquaredBinaryOpencv> myMarker = xpcf::utils::make_shared<SolARMarker2DSquaredBinaryOpencv>();
SRef<SolARDescriptorsExtractorSBPatternOpencv> mySBPatternExtractor = xpcf::utils::make_shared<SolARDescriptorsExtractorSBPatternOpencv>();
SRef<SolARCameraOpencv> myCamera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARImageConvertorOpencv> myImageConvertor = xpcf::utils::make_shared<SolARImageConvertorOpencv>();
SRef<SolARImageFilterOpencv> myImageFilter = xpcf::utils::make_shared<SolARImageFilterOpencv>();
SRef<SolARContoursExtractorOpencv> myContoursExtractor = xpcf::utils::make_shared<SolARContoursExtractorOpencv>();
SRef<SolARContoursFilterBinaryMarkerOpencv> myContoursFilter = xpcf::utils::make_shared<SolARContoursFilterBinaryMarkerOpencv>();
SRef<SolARPerspectiveControllerOpencv> myPerspectiveController = xpcf::utils::make_shared<SolARPerspectiveControllerOpencv>();
SRef<SolARDescriptorMatcherRadiusOpencv> myMatcher = xpcf::utils::make_shared<SolARDescriptorMatcherRadiusOpencv>();
SRef<SolARSBPatternReIndexer> myPatternReindexer = xpcf::utils::make_shared<SolARSBPatternReIndexer>();
SRef<SolARImage2WorldMapper4Marker2D> myImage2WorldMapper = xpcf::utils::make_shared<SolARImage2WorldMapper4Marker2D>();
SRef<SolARPoseEstimationOpencv> myPoseEstimation = xpcf::utils::make_shared<SolARPoseEstimationOpencv>();
SRef<SolAR3DOverlayOpencv> my3DOverlay = xpcf::utils::make_shared<SolAR3DOverlayOpencv>();
SRef<SolARImageViewerOpencv> myViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
Your components will exchange data through data structures when running the pipeline. You need to declare all those shown in the fiducial marker pipeline schema.
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<datastructure::Image> myImage;
SRef<datastructure::Image> myCamImage;
SRef<datastructure::Image> myGreyImage;
SRef<datastructure::Image> myBinaryImage;
SRef<datastructure::SquaredBinaryPattern> myMarkerPattern;
std::vector<SRef<datastructure::Contour2Df>> myContours;
std::vector<SRef<datastructure::Contour2Df>> myFilteredContours;
std::vector<SRef<datastructure::Image>> myPatches;
std::vector<SRef<datastructure::Contour2Df>> myRecognizedContours;
SRef<datastructure::DescriptorBuffer> myMarkerSBPatternsDescriptors;
SRef<datastructure::DescriptorBuffer> myCamSBPatternsDescriptors;
std::vector<datastructure::DescriptorMatch> myMatches;
std::vector<SRef<datastructure::Point2Df>> myMarkerCorners;
std::vector<SRef<datastructure::Point2Df>> myCamCorners;
std::vector<SRef<datastructure::Point3Df>> myWorldCorners;
datastructure::Pose myPose;
Now, you are ready to code the pipeline. You can start with the initialization, which consists in:
loading the fiducial marker (first argument of your program),
extracting the pattern descriptors of the reference fiducial marker,
starting the camera whose id is passed as the third argument of your program,
loading and setting your camera calibration file (second argument of your program),
setting the configuration of components that have parameters:
SolARContoursExtractorOpencv,
SolARImageFilterOpencv,
SolARPerspectiveControllerOpencv,
SolARDescriptorsExtractorSBPatternOpencv,
SolARSBPatternReIndexer,
SolARImage2WorldMapper4Marker2D,
SolARPoseEstimationOpencv,
SolAR3DOverlayOpencv.
Your pipeline initialization should look like the following one:
// Initialize your components before starting the pipeline loop
// ADD HERE: Find a way to load marker
if (myMarker->loadMarker(argv[1]) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot load marker");
return;
}
else
{
LOG_INFO("Marker loaded");
}
// ADD HERE : Find a way to extract squared binary pattern descriptors from the reference fiducial marker
myMarkerPattern = myMarker->getPattern();
mySBPatternExtractor->extract(myMarkerPattern, myMarkerSBPatternsDescriptors);
// ADD HERE : Launch the camera
if (myCamera->start(atoi(argv[3])) != FrameworkReturnCode::_SUCCESS) // Camera
{
LOG_ERROR("Camera with id {} does not exist", argv[3]);
return;
}
// ADD HERE : Load the calibration file of the camera
if (myCamera->loadCameraParameters(argv[2]) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot load camera calibration file");
}
// ADD HERE : Initialization of the Contours Extractor
myContoursExtractor->setParameters(4);
// ADD HERE : Initialization of the Contours Filter
myContoursFilter->setParameters(20);
// ADD HERE : Initialization of the Perspective Controller
myPerspectiveController->setParameters({640,480});
// ADD HERE : Initialization of the Squared Binary Pattern Extractor
mySBPatternExtractor->setParameters(myMarkerPattern->getSize());
// ADD HERE : Initialization of the Pattern Reindexer
myPatternReindexer->setParameters(myMarkerPattern->getSize());
// ADD HERE : Initialize the image2World mapper
myImage2WorldMapper->setParameters({(uint32_t)myMarkerPattern->getSize(),(uint32_t)myMarkerPattern->getSize()}, myMarker->getSize());
// ADD HERE : Initialize the Pose Estimation component
myPoseEstimation->setCameraParameters(myCamera->getIntrinsicsParameters(), myCamera->getDistorsionParameters());
// ADD HERE : Initialize the 3D Overlay component
my3DOverlay->setCameraParameters(myCamera->getIntrinsicsParameters(), myCamera->getDistorsionParameters());
Great! Now you are ready to code the loop of your pipeline. It consists in:
getting the last image captured by the camera,
converting this image in greyscale,
binarizing the greyscale image to obtain a black and white image,
extracting contours from the binary image,
filtering contours to keep those which can correspond to a fiducial pattern,
applying a perspective control for each contour to extract a small image representing each pattern candidate,
extracting a squared binary pattern descriptors from each small image,
matching these previous descriptors with the descriptor of the reference marker pattern,
reindexing contours according to matches,
mapping the corners of the marker to obtain their 3D positions relatively to the world coordinate system,
estimating the pose of the camera,
overlaying a virtual 3D box over marker in the image captured by the camera,
displaying the resulting image in a viewer,
restarting this loop until the user presses the escape key.
// The escape key to exit the sample
char escape_key = 27;
// The pipeline loop
while (true)
{
// ADD HERE : Get the last image returned by the camera
if (myCamera->getNextImage(myCamImage) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot get access to the image returned by the camera");
break;
}
// ADD HERE : the call to components to get your camera pose
myImageConvertor->convert(myCamImage, myGreyImage, Image::ImageLayout::LAYOUT_GREY);
myImageFilter->binarize(myGreyImage, myBinaryImage, -1, 255);
myContoursExtractor->extract(myBinaryImage, myContours);
myContoursFilter->filter(myContours, myFilteredContours);
myPerspectiveController->correct(myBinaryImage, myFilteredContours, myPatches);
if (mySBPatternExtractor->extract(myPatches, myFilteredContours, myCamSBPatternsDescriptors, myRecognizedContours) != FrameworkReturnCode::_ERROR_)
{
if (myMatcher->match(myMarkerSBPatternsDescriptors, myCamSBPatternsDescriptors, myMatches) == api::features::DescriptorMatcher::DESCRIPTORS_MATCHER_OK)
{
myPatternReindexer->reindex(myRecognizedContours, myMatches, myMarkerCorners, myCamCorners);
myImage2WorldMapper->map(myMarkerCorners, myWorldCorners);
myPoseEstimation->poseFromSolvePNP(myPose, myCamCorners, myWorldCorners);
my3DOverlay->drawBox(myPose, myMarker->getWidth(), myMarker->getHeight(), 0.1, Transform3Df::Identity(), myCamImage);
}
}
// ADD HERE : Draw a window with a box displayed over the fiducial marker
if (myViewer->display("AR Box", myCamImage, &escape_key) == FrameworkReturnCode::_STOP)
break;
}
You can now click on run. A window should appear showing the video captured live by your camera. If you put the squared binary fiducial marker you printed earlier in front of the camera, you should see a virtual box displayed over it. To exit the application, just press the escape key.
Below is the full source code of this tutorial:
// ADD HERE:: header files of the components you want to use
#include "SolARMarker2DSquaredBinaryOpencv.h"
#include "SolARDescriptorsExtractorSBPatternOpencv.h"
#include "SolARCameraOpencv.h"
#include "SolARImageConvertorOpencv.h"
#include "SolARImageFilterOpencv.h"
#include "SolARContoursExtractorOpencv.h"
#include "SolARContoursFilterBinaryMarkerOpencv.h"
#include "SolARPerspectiveControllerOpencv.h"
#include "SolARDescriptorMatcherRadiusOpencv.h"
#include "SolARSBPatternReIndexer.h"
#include "SolARImage2WorldMapper4Marker2D.h"
#include "SolARPoseEstimationOpencv.h"
#include "SolAR3DOverlayOpencv.h"
#include "SolARImageViewerOpencv.h"
using namespace SolAR;
using namespace SolAR::MODULES::OPENCV;
using namespace SolAR::MODULES::TOOLS;
namespace xpcf = org::bcom::xpcf;
void main(int argc,char** argv){
// To redirect log to the console
LOG_ADD_LOG_TO_CONSOLE();
// ADD HERE: declarations and instantiation of components
// see API ref on SolAR website
// Example to declare and create a camera:
// SRef<SolARCameraOpencv> camera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARMarker2DSquaredBinaryOpencv> myMarker = xpcf::utils::make_shared<SolARMarker2DSquaredBinaryOpencv>();
SRef<SolARDescriptorsExtractorSBPatternOpencv> mySBPatternExtractor = xpcf::utils::make_shared<SolARDescriptorsExtractorSBPatternOpencv>();
SRef<SolARCameraOpencv> myCamera = xpcf::utils::make_shared<SolARCameraOpencv>();
SRef<SolARImageConvertorOpencv> myImageConvertor = xpcf::utils::make_shared<SolARImageConvertorOpencv>();
SRef<SolARImageFilterOpencv> myImageFilter = xpcf::utils::make_shared<SolARImageFilterOpencv>();
SRef<SolARContoursExtractorOpencv> myContoursExtractor = xpcf::utils::make_shared<SolARContoursExtractorOpencv>();
SRef<SolARContoursFilterBinaryMarkerOpencv> myContoursFilter = xpcf::utils::make_shared<SolARContoursFilterBinaryMarkerOpencv>();
SRef<SolARPerspectiveControllerOpencv> myPerspectiveController = xpcf::utils::make_shared<SolARPerspectiveControllerOpencv>();
SRef<SolARDescriptorMatcherRadiusOpencv> myMatcher = xpcf::utils::make_shared<SolARDescriptorMatcherRadiusOpencv>();
SRef<SolARSBPatternReIndexer> myPatternReindexer = xpcf::utils::make_shared<SolARSBPatternReIndexer>();
SRef<SolARImage2WorldMapper4Marker2D> myImage2WorldMapper = xpcf::utils::make_shared<SolARImage2WorldMapper4Marker2D>();
SRef<SolARPoseEstimationOpencv> myPoseEstimation = xpcf::utils::make_shared<SolARPoseEstimationOpencv>();
SRef<SolAR3DOverlayOpencv> my3DOverlay = xpcf::utils::make_shared<SolAR3DOverlayOpencv>();
SRef<SolARImageViewerOpencv> myViewer = xpcf::utils::make_shared<SolARImageViewerOpencv>();
// ADD HERE: declarations of data structures used to connect components
// Example to declare a SolARImage:
// SRef<datastructure::Image> myImage;
SRef<datastructure::Image> myCamImage;
SRef<datastructure::Image> myGreyImage;
SRef<datastructure::Image> myBinaryImage;
SRef<datastructure::SquaredBinaryPattern> myMarkerPattern;
std::vector<SRef<datastructure::Contour2Df>> myContours;
std::vector<SRef<datastructure::Contour2Df>> myFilteredContours;
std::vector<SRef<datastructure::Image>> myPatches;
std::vector<SRef<datastructure::Contour2Df>> myRecognizedContours;
SRef<datastructure::DescriptorBuffer> myMarkerSBPatternsDescriptors;
SRef<datastructure::DescriptorBuffer> myCamSBPatternsDescriptors;
std::vector<datastructure::DescriptorMatch> myMatches;
std::vector<SRef<datastructure::Point2Df>> myMarkerCorners;
std::vector<SRef<datastructure::Point2Df>> myCamCorners;
std::vector<SRef<datastructure::Point3Df>> myWorldCorners;
datastructure::Pose myPose;
// Initialize your components before starting the pipeline loop
// ADD HERE: Load the reference fiducial marker
if (myMarker->loadMarker(argv[1]) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot load marker");
return -1;
}
else
{
LOG_INFO("Marker loaded");
}
// ADD HERE : Find a way to extract squared binary pattern descriptors from the reference fiducial marker
myMarkerPattern = myMarker->getPattern();
mySBPatternExtractor->extract(myMarkerPattern, myMarkerSBPatternsDescriptors);
// ADD HERE : Launch the camera
if (myCamera->start(atoi(argv[3])) != FrameworkReturnCode::_SUCCESS) // Camera
{
LOG_ERROR("Camera with id {} does not exist", argv[3]);
return -1;
}
// ADD HERE : Load the calibration file of the camera
if (myCamera->loadCameraParameters(argv[2]) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot load camera calibration file");
}
// ADD HERE : Initialization of the Contours Extractor
myContoursExtractor->setParameters(4);
// ADD HERE : Initialization of the Contours Filter
myContoursFilter->setParameters(20);
// ADD HERE : Initialization of the Perspective Controller
myPerspectiveController->setParameters({640,480});
// ADD HERE : Initialization of the Squared Binary Pattern Extractor
mySBPatternExtractor->setParameters(myMarkerPattern->getSize());
// ADD HERE : Initialization of the Pattern Reindexer
myPatternReindexer->setParameters(myMarkerPattern->getSize());
// ADD HERE : Initialize the image2World mapper
myImage2WorldMapper->setParameters({(uint32_t)myMarkerPattern->getSize(),(uint32_t)myMarkerPattern->getSize()}, myMarker->getSize());
// ADD HERE : Initialize the Pose Estimation component
myPoseEstimation->setCameraParameters(myCamera->getIntrinsicsParameters(), myCamera->getDistorsionParameters());
// ADD HERE : Initialize the 3D Overlay component
my3DOverlay->setCameraParameters(myCamera->getIntrinsicsParameters(), myCamera->getDistorsionParameters());
// The escape key to exit the sample
char escape_key = 27;
// The pipeline loop
while (true)
{
// ADD HERE : Get the last image returned by the camera
if (myCamera->getNextImage(myCamImage) == SolAR::FrameworkReturnCode::_ERROR_)
{
LOG_ERROR("Cannot get access to the image returned by the camera");
break;
}
// ADD HERE : call the components to estimate your camera pose
myImageConvertor->convert(myCamImage, myGreyImage, datastructure::Image::ImageLayout::LAYOUT_GREY);
myImageFilter->binarize(myGreyImage, myBinaryImage, -1, 255);
myContoursExtractor->extract(myBinaryImage, myContours);
myContoursFilter->filter(myContours, myFilteredContours);
myPerspectiveController->correct(myBinaryImage, myFilteredContours, myPatches);
if (mySBPatternExtractor->extract(myPatches, myFilteredContours, myCamSBPatternsDescriptors, myRecognizedContours) != FrameworkReturnCode::_ERROR_)
{
if (myMatcher->match(myMarkerSBPatternsDescriptors, myCamSBPatternsDescriptors, myMatches) == api::features::DescriptorMatcher::DESCRIPTORS_MATCHER_OK)
{
myPatternReindexer->reindex(myRecognizedContours, myMatches, myMarkerCorners, myCamCorners);
myImage2WorldMapper->map(myMarkerCorners, myWorldCorners);
myPoseEstimation->poseFromSolvePNP(myPose, myCamCorners, myWorldCorners);
my3DOverlay->drawBox(myPose, myMarker->getWidth(), myMarker->getHeight(), 0.1, Transform3Df::Identity(), myCamImage);
}
}
// ADD HERE : Draw a window with a box displayed over the fiducial marker
if (myViewer->display("AR Box", myCamImage, &escape_key) == FrameworkReturnCode::_STOP)
break;
}
return 0;
}
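Once built, the sample expects three command-line arguments: the fiducial marker description file (argv[1]), the camera calibration file (argv[2]), and the camera device id (argv[3]). An invocation might look like the following sketch; the file names are placeholders, so substitute the marker and calibration files shipped with your SolAR samples:

```shell
# Hypothetical invocation (file names are examples, not shipped defaults):
# argv[1] -> fiducial marker description file
# argv[2] -> camera calibration file
# argv[3] -> camera device id (0 is usually the default webcam)
./SolARFiducialTutorial FiducialMarker.yml camera_calibration.yml 0
```

Press the escape key in the display window to exit the pipeline loop.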